
    Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains

    The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples that give partial descriptions of the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as statistical relational learning, neuro-symbolic systems, and high-level control have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. In this article, we survey work that provides further evidence for the connections between logic and learning. Our narrative is structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, but naturally, there is considerable overlap. We place an emphasis on the following "sore" point: there is a common misconception that logic is for discrete properties, whereas probability theory, and machine learning more generally, are for continuous properties. We report on results that challenge this view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.
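    To make the "sore" point concrete, here is a toy sketch (our illustration, not taken from the survey) of evaluating a logical formula over a continuous domain: the probability that a Gaussian-distributed real variable satisfies a disjunctive constraint is obtained by integrating the density over the region the formula carves out.

    ```python
    # Toy illustration (not from the survey): a logical formula over a
    # continuous domain. For x ~ N(0, 1), compute the probability that x
    # satisfies (x > 0.5) OR (x < -1.5) by summing the Gaussian mass of
    # the disjoint intervals the formula denotes.
    from scipy.stats import norm

    def prob_satisfies(intervals):
        """Sum the N(0, 1) mass of disjoint intervals (a, b)."""
        return sum(norm.cdf(b) - norm.cdf(a) for a, b in intervals)

    # (x > 0.5) OR (x < -1.5) denotes these two intervals:
    region = [(-float("inf"), -1.5), (0.5, float("inf"))]
    print(prob_satisfies(region))  # ~0.375
    ```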

    Suppression of interferon gene expression overcomes resistance to MEK inhibition in KRAS-mutant colorectal cancer.

    Despite showing clinical activity in BRAF-mutant melanoma, the MEK inhibitor (MEKi) trametinib has failed to show clinical benefit in KRAS-mutant colorectal cancer. To identify mechanisms of resistance to MEKi, we performed a pharmacogenomic analysis of MEKi-sensitive versus MEKi-resistant colorectal cancer cell lines. Strikingly, interferon- and inflammation-related gene sets were enriched in cell lines exhibiting intrinsic and acquired resistance to MEK inhibition. The bromodomain inhibitor JQ1 suppressed interferon-stimulated gene (ISG) expression and, in combination with MEK inhibitors, displayed synergistic effects and induced apoptosis in MEKi-resistant colorectal cancer cell lines. ISG expression was confirmed in patient-derived organoid models, which displayed resistance to trametinib and were resensitized by JQ1 co-treatment. In in vivo models of colorectal cancer, the combination treatment significantly suppressed tumor growth. Our findings provide a novel explanation for the limited response to MEK inhibitors in KRAS-mutant colorectal cancer, a disease known for its inflammatory nature. Moreover, high expression of ISGs was associated with significantly reduced survival of colorectal cancer patients. Excitingly, we have identified novel therapeutic opportunities to overcome intrinsic and acquired resistance to MEK inhibition in colorectal cancer.

    ExpExpExplosion: Uniform Interpolation in General EL Terminologies.

    Although EL is a popular logic used in large existing knowledge bases, to the best of our knowledge no procedure has yet been proposed that computes uniform EL interpolants of general EL terminologies, and the bounds on the size of uniform EL interpolants have so far remained unknown. In this paper, we propose an approach based on proof theory and the theory of formal tree languages for computing a finite uniform interpolant of a general EL terminology, if one exists. Further, we show that, if such a finite uniform EL interpolant exists, then there exists one that is at most triple exponential in the size of the original TBox, and that, in the worst case, no shorter interpolants exist, thereby establishing a tight triple-exponential bound on their size.
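    A minimal illustration of uniform interpolation (a toy example of ours, not drawn from the paper): forgetting a concept name from a small TBox while preserving all consequences over the remaining signature.

    ```latex
    % Toy EL TBox and its uniform interpolant after forgetting B
    \[
    \mathcal{T} = \{\, A \sqsubseteq \exists r.B,\;\; B \sqsubseteq C \,\}
    \]
    % Interpolating over the signature $\Sigma = \{A, r, C\}$ yields
    \[
    \mathcal{T}|_{\Sigma} = \{\, A \sqsubseteq \exists r.C \,\},
    \]
    % which entails exactly the $\Sigma$-consequences of $\mathcal{T}$.
    ```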

    SLD-Resolution Reduction of Second-Order Horn Fragments

    We present the derivation reduction problem for SLD-resolution: the undecidable problem of finding a finite subset of a set of clauses from which the whole set can be derived using SLD-resolution. We study the reducibility of various fragments of second-order Horn logic with particular applications in Inductive Logic Programming. We also discuss how these results extend to standard resolution.
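    A small first-order illustration of derivation reduction (our own toy example; the paper itself treats second-order fragments):

    ```latex
    % The set S contains a clause derivable from the other two:
    \[
    S = \{\; p(X) \leftarrow q(X),\;\; q(X) \leftarrow r(X),\;\; p(X) \leftarrow r(X) \;\}
    \]
    % Resolving the body atom $q(X)$ of the first clause against the head
    % of the second derives $p(X) \leftarrow r(X)$, so the finite subset
    \[
    S' = \{\; p(X) \leftarrow q(X),\;\; q(X) \leftarrow r(X) \;\}
    \]
    % derives all of $S$ by SLD-resolution, i.e., $S$ reduces to $S'$.
    ```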

    Relational Models

    Keywords: relational learning, statistical relational models, statistical relational learning, relational data mining.

    Glossary:

    Entity: Entities are (abstract) objects. An actor in a social network can be modelled as an entity. There can be multiple types of entities, entity attributes and relationships between entities. Entities, relationships and attributes are defined in the entity-relationship model, which is used in the design of a formal relational model.

    Relation: A relation or relation instance I(R) is a finite set of tuples. A tuple is an ordered list of elements. R is the name or type of the relation. A database instance (or world) is a set of relation instances.

    Predicate: A predicate R is a mapping of tuples to true or false. R(tuple) is a ground predicate and is true when tuple ∈ R; otherwise it is false. Note that we do not distinguish between the relation name R and the predicate name.
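    The glossary's definitions translate directly into code. A minimal sketch (with hypothetical relation and helper names of our choosing): a relation instance as a finite set of tuples, and a ground predicate that is true exactly when the tuple belongs to the relation.

    ```python
    # Relation instance I(Friend): a finite set of ordered tuples.
    friend = {("alice", "bob"), ("bob", "carol")}

    def holds(relation: set, tup: tuple) -> bool:
        """Ground predicate: R(tuple) is true when tuple is in R."""
        return tup in relation

    print(holds(friend, ("alice", "bob")))    # True
    print(holds(friend, ("alice", "carol")))  # False

    # A database instance (or "world") is a set of relation instances.
    world = {"Friend": friend}
    ```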

    Back to the Feature: A Neural-Symbolic Perspective on Explainable AI

    We discuss a perspective aimed at making black-box models more eXplainable, within the eXplainable AI (XAI) strand of research. We argue that the traditional end-to-end learning approach used to train Deep Learning (DL) models does not fit the tenets and aims of XAI. Going back to the idea of hand-crafted feature engineering, we suggest a hybrid DL approach to XAI: instead of employing end-to-end learning, we suggest using DL for the automatic detection of meaningful, hand-crafted, high-level symbolic features, which are then used by a standard and more interpretable learning model. We exemplify this hybrid learning model in a proof of concept, based on the recently proposed Kandinsky Patterns benchmark, that focuses on the symbolic learning part of the pipeline by using both Logic Tensor Networks and interpretable rule ensembles. After showing that the proposed methodology is able to deliver highly accurate and explainable models, we discuss potential implementation issues and future directions that can be explored.
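    A minimal sketch of the hybrid pipeline shape the abstract describes, not the authors' implementation: a deep detector (stubbed here by a hypothetical `detect_symbolic_features`) outputs symbolic attributes, and an interpretable model (a shallow decision tree standing in for the paper's rule ensembles and Logic Tensor Networks) is trained on those attributes instead of raw pixels.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    def detect_symbolic_features(images: np.ndarray) -> np.ndarray:
        # Placeholder for a DL detector that would emit symbolic
        # attributes per image, e.g. [n_circles, n_squares, same_color]
        # (attribute names are our assumption, not from the paper).
        rng = np.random.default_rng(0)
        return rng.integers(0, 3, size=(len(images), 3))

    images = np.zeros((100, 32, 32))       # dummy image batch
    X = detect_symbolic_features(images)   # symbolic feature vectors
    y = (X[:, 0] == X[:, 1]).astype(int)   # a Kandinsky-style concept

    # The interpretable stage: a shallow tree whose rules can be printed.
    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(clf, feature_names=["n_circles", "n_squares", "same_color"]))
    ```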